Project 4: AutoStitching Photo Mosaics¶

By Jathin Korrapati¶

Part A: Image Warping and Mosaicing¶

Part 0: Image Pre-processing¶

These are the base images I took from my room:¶

Some BWW images:¶


Part 1: Recover Homographies¶

For this part, we must find a way to map the correspondences between the source points and the target points. I used the SVD approach (which yielded better results for me), building two row vectors for each pair of correspondence points:¶

H_x = (-x_i, -y_i, -1, 0, 0, 0, x_j * x_i, x_j * y_i, x_j)¶

H_y = (0, 0, 0, -x_i, -y_i, -1, y_j * x_i, y_j * y_i, y_j)¶

After this, the vectors are stacked into a matrix A, whose SVD we compute to get U, S, and Vt. The last row of Vt corresponds to the smallest singular value and contains the elements of the flattened homography matrix. We simply reshape it into a 3x3 matrix and divide by its bottom-right entry to normalize the transform, giving our homography matrix.¶
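The steps above can be sketched as a small function (a sketch of the approach, not the exact project code; `compute_homography` is a hypothetical name):¶

```python
import numpy as np

def compute_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src_pts -> dst_pts via SVD (DLT).

    src_pts, dst_pts: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    """
    A = []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        # The two rows per correspondence, matching H_x and H_y above.
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    # The right singular vector for the smallest singular value is the
    # last row of Vt; it holds the flattened homography.
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```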

Here are each of my images labeled with corresponding points:¶


Part 2: Image Warping¶

Now, given the homography and an image, we can warp the input image into the target perspective. warpImage takes the image we want to warp and its homography to the target image. We first apply the homography to the old corners of the image to compute the new corners, which define our bounding box. Then I determine the pixel locations of the warped image, apply the inverse homography to find the corresponding points in the input image, and fill each pixel via this inverse warp.¶
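A minimal sketch of this inverse-warping procedure, assuming H maps input (x, y, 1) to target coordinates and using nearest-neighbor sampling (the project likely uses interpolation; this is illustrative):¶

```python
import numpy as np

def warp_image(im, H):
    """Warp im (h, w, c) by homography H using inverse warping."""
    h, w = im.shape[:2]
    # Forward-map the four corners to get the output bounding box.
    corners = np.array([[0, 0, 1], [w - 1, 0, 1],
                        [w - 1, h - 1, 1], [0, h - 1, 1]], float).T
    mapped = H @ corners
    mapped = mapped[:2] / mapped[2]
    xmin, ymin = np.floor(mapped.min(axis=1)).astype(int)
    xmax, ymax = np.ceil(mapped.max(axis=1)).astype(int)
    out = np.zeros((ymax - ymin + 1, xmax - xmin + 1, im.shape[2]))
    # Inverse-map every output pixel back into the source image.
    ys, xs = np.mgrid[ymin:ymax + 1, xmin:xmax + 1]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out.reshape(-1, im.shape[2])[valid] = im[sy[valid], sx[valid]]
    return out, (xmin, ymin)  # offset of the output box in target coords
```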

Part 3: Image Rectification¶

Using our warping and homography functions from before, we can now take a picture of a rectangular object and warp it so that its plane is parallel to the image plane, facing the front of the frame. We manually set the correspondences that map the rectangle/square onto a fronto-parallel rectangle.¶

Here is my notebook and its rectification:¶
Here is my mouse and its rectification:¶
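The rectification setup can be sketched like this: pick the object's four corners by hand and solve for the homography onto an axis-aligned rectangle (the DLT solve mirrors Part 1; `rectify_homography` is a hypothetical helper name):¶

```python
import numpy as np

def rectify_homography(quad_pts, width, height):
    """Homography mapping four hand-picked corners of a planar object
    (ordered clockwise from top-left) onto a width x height rectangle."""
    dst = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    A = []
    for (x, y), (xp, yp) in zip(quad_pts, dst):
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```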

Part 4: Mosaic Blending¶

To blend my mosaics, I start with a naive approach and then apply simple alpha blending to better normalize the colors between the overlapping regions. The first thing I do is calculate the overlap and blend the images together directly, but that led to some strange artifacts, so I apply alpha blending to smooth out the intersection region. This works decently well. Any remaining seams or ghosting seem to stem from slightly inconsistent lighting and slight miscorrespondences from the labeling tool.¶

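A minimal sketch of the alpha-blending idea, assuming the two images are already warped into a shared canvas (zeros outside coverage) and the overlap runs left/right, with a linear alpha ramp across it:¶

```python
import numpy as np

def alpha_blend(im1, im2):
    """Blend two aligned canvases with a horizontal alpha ramp in the overlap."""
    m1 = im1.sum(axis=2) > 0          # coverage masks from nonzero pixels
    m2 = im2.sum(axis=2) > 0
    overlap = m1 & m2
    alpha = np.where(m1, 1.0, 0.0)    # im1 wins wherever only it has data
    cols = np.where(overlap.any(axis=0))[0]
    if cols.size:
        # Ramp from 1 (im1 side) down to 0 (im2 side) across overlap columns.
        ramp = np.linspace(1.0, 0.0, cols.size)
        for c, a in zip(cols, ramp):
            alpha[:, c][overlap[:, c]] = a
    out = alpha[..., None] * im1 + (1 - alpha[..., None]) * im2
    out[m2 & ~m1] = im2[m2 & ~m1]     # im2-only regions keep im2's pixels
    return out
```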

What I learned¶

I thought it was so cool to see how warping images from one perspective to another would lead to us being able to create auto-stitching with manual correspondences that we decide would be best to create a panorama. Image warping was definitely the coolest part, seeing how images adapt from their base perspective to the perspective of another image.¶

Part B: Feature Matching for Autostitching¶

Part 1: Harris Corner Detector¶

To detect corners with the Harris Corner Detector, I used the provided harris_corners.py. Corners are the peaks in the Harris response matrix, which is computed using Gaussian derivative filters; the detected corner points are plotted below.¶

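The idea behind the Harris response can be sketched as follows (a sketch of the technique, not the course's harris_corners.py itself):¶

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(im, sigma=1.0):
    """Harris corner response for a grayscale image via Gaussian derivatives."""
    # Gaussian derivatives of the image (x- and y-gradients).
    Ix = gaussian_filter(im, sigma, order=(0, 1))
    Iy = gaussian_filter(im, sigma, order=(1, 0))
    # Smoothed gradient products form the structure tensor entries.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    # Harris measure det(M) / trace(M); its peaks mark corners.
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det / (trace + 1e-8)
```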

Part 2: Adaptive Non-Maximal Suppression¶

The problem with the Harris detector is that the returned points are very clustered together, without much coverage across the image. Adaptive Non-Maximal Suppression (ANMS) fixes this. It works by calculating a suppression radius for each feature in the image, defined as the smallest distance to another feature point that is sufficiently stronger (scaled by c_robust). We then keep the n features with the largest radii and plot them. I set c_robust = 0.6 and kept the top 500 points.¶

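A sketch of ANMS under these definitions (a point j suppresses point i when f_i < c_robust · f_j; `anms` is a hypothetical name):¶

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.6):
    """Keep the n_keep points with the largest suppression radii.

    coords: (N, 2) corner positions; strengths: (N,) Harris responses.
    """
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    radii = np.full(len(coords), np.inf)   # global maximum keeps radius inf
    for i in range(len(coords)):
        # Points strong enough to suppress point i.
        stronger = strengths[i] < c_robust * strengths
        if stronger.any():
            d2 = ((coords[stronger] - coords[i]) ** 2).sum(axis=1)
            radii[i] = np.sqrt(d2.min())
    keep = np.argsort(-radii)[:n_keep]     # largest radii first
    return coords[keep], radii[keep]
```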

Part 3: Feature Descriptor Extraction¶

For feature descriptor extraction, we extract the 40x40 region of the image around each point, rescale it to 8x8 pixels, and normalize it so the mean equals 0 (centering) and the standard deviation equals 1. The next step is feature matching between the left and right images. First, I find the two nearest neighbors of each descriptor, then use Lowe's trick: compute the ratio d = NN_1 / NN_2 and keep matches with d < 0.1 as my threshold. The matched points for my room are displayed below:¶

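These two steps can be sketched as follows, assuming plain stride-5 subsampling of the 40x40 patch (the project may blur before downsampling) and points at least 20 pixels from the border:¶

```python
import numpy as np

def extract_descriptor(im, y, x):
    """8x8 bias/gain-normalized descriptor from a 40x40 patch around (y, x)."""
    patch = im[y - 20:y + 20, x - 20:x + 20]
    small = patch[::5, ::5]                  # 40x40 -> 8x8 via subsampling
    vec = small.ravel().astype(float)
    return (vec - vec.mean()) / (vec.std() + 1e-8)

def match_features(desc1, desc2, ratio=0.1):
    """Lowe's ratio test: keep a match when 1-NN dist / 2-NN dist < ratio."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn = np.argsort(dists)[:2]           # indices of two nearest neighbors
        if dists[nn[0]] / (dists[nn[1]] + 1e-8) < ratio:
            matches.append((i, nn[0]))
    return matches
```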

Part 4: RANSAC¶


Part 5: Autostitching vs Manual Stitching¶

In implementing the RANSAC algorithm, I choose 4 random correspondence pairs and compute an H matrix from them. That H is applied to the base image's points to map them toward the image it's warped to, and the number of inlier points within a threshold is counted. I chose a base threshold of 8 pixels. I run the loop 2000 times, take the largest set of inliers, and use least squares on them to find the best H matrix.¶
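The loop above can be sketched as follows (a sketch of the stated steps, with the DLT/SVD fit inlined so it is self-contained):¶

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iters=2000, thresh=8.0, seed=0):
    """4-point RANSAC: largest inlier set, then a least-squares refit."""
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    rng = np.random.default_rng(seed)

    def fit(p1, p2):
        A = []
        for (x, y), (xp, yp) in zip(p1, p2):
            A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
            A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
        _, _, Vt = np.linalg.svd(np.array(A))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def inliers(H):
        ones = np.ones((len(pts1), 1))
        proj = H @ np.hstack([pts1, ones]).T
        proj = (proj[:2] / proj[2]).T
        return np.linalg.norm(proj - pts2, axis=1) < thresh

    best = np.zeros(len(pts1), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)
        H = fit(pts1[idx], pts2[idx])
        if not np.isfinite(H).all():
            continue                      # skip degenerate samples
        inl = inliers(H)
        if inl.sum() > best.sum():
            best = inl
    # Refit on the full inlier set (least squares via SVD).
    return fit(pts1[best], pts2[best]), best
```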


What I learned¶

I thought this project was really cool, particularly how in creating panoramic images, we are able to generate code to actually choose the correspondence points for us between two different images, and then computing the Homography matrix with the RANSAC algorithm. It was really cool to see how well it performed, and very interesting to see what points the algorithm itself chose (and how they were similar/different to the points I chose in my manual correspondences).¶